Understanding the origin and influence of a publication's ideas is critical to conducting scientific research. However, the proliferation of scientific publications makes it difficult for researchers to sort out the evolution of all the relevant literature. To this end, we present IdeaReader, a machine reading system that finds out which papers are most likely to inspire or be influenced by a target publication and summarizes the ideas of these papers in natural language. Specifically, IdeaReader first clusters the references and citations (first-order or higher-order) of the target publication, and the obtained clusters are regarded as the topics that inspire or are influenced by the target publication. It then picks out the important papers from each cluster to extract the skeleton of the idea flow. Finally, IdeaReader automatically generates a literature review of the important papers in each topic. Our system can help researchers gain insight into how scientific ideas flow from the references to the citations of a target publication, through automatically generated surveys and visualization of the idea flow.
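To make the pipeline concrete, here is a minimal Python sketch of the three stages described above (cluster the citation neighborhood, pick key papers per topic, summarize each topic). All of the helper callables are assumed interfaces for illustration only, not the system's actual API.

```python
# Hypothetical sketch of an IdeaReader-style pipeline: cluster the citation
# neighborhood of a target paper, pick key papers per cluster, and summarize.
from collections import defaultdict

def idea_flow(target_paper, get_neighbors, embed, cluster, rank, summarize):
    """All callables are assumed interfaces, not the paper's actual API.

    get_neighbors(paper) -> referenced/citing papers (first- or higher-order)
    embed(paper)         -> feature vector used for clustering
    cluster(vectors)     -> one cluster label per vector
    rank(papers)         -> papers sorted by importance (e.g., citation count)
    summarize(papers)    -> natural-language review of the given papers
    """
    neighbors = get_neighbors(target_paper)
    labels = cluster([embed(p) for p in neighbors])

    topics = defaultdict(list)
    for paper, label in zip(neighbors, labels):
        topics[label].append(paper)

    # Keep only the most important papers per topic, then write a mini-review.
    return {label: summarize(rank(papers)[:5]) for label, papers in topics.items()}
```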
We propose a novel method for relocalization or place recognition, a fundamental problem to be solved in many robotics, automation, and AR applications. Rather than relying on often unstable appearance information, we consider a scenario in which the reference map is given in the form of localized objects. Our localization framework relies on 3D semantic object detection followed by association with the objects in the map. Possible sets of pairwise associations are grown on the basis of hierarchical clustering with a merging metric that evaluates spatial compatibility. The latter notably uses information about relative object configurations, which is invariant with respect to global transformations. As the camera incrementally explores the environment and detects more objects, the association sets are updated and expanded. We test our algorithm in several challenging scenarios, including dynamic scenes, large viewpoint changes, and scenes with repeated instances. Our experiments show that our method outperforms the prior art in terms of both robustness and accuracy.
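The merging metric's key ingredient is that relative object configurations, such as pairwise distances between object centroids, do not change under a global rigid transformation. The short numpy check below illustrates only this invariance; it does not reproduce the paper's actual metric.

```python
import numpy as np

def pairwise_distances(points):
    """Distances between all object centroids; invariant to rigid transforms."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
objects_map = rng.normal(size=(5, 3))          # object centroids in the map frame

# Apply an arbitrary global rotation + translation (a different camera frame).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
objects_cam = objects_map @ R.T + np.array([2.0, -1.0, 0.5])

# The relative configuration is preserved, so it can score candidate
# object-to-object associations without knowing the global transformation.
assert np.allclose(pairwise_distances(objects_map),
                   pairwise_distances(objects_cam))
```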
Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task that has drawn increasing attention in recent years. Existing approaches address the problem in a restricted setting without appropriately simulating real application scenarios. To mimic a realistic setting, in this track we adopt large-scale sketches drawn by amateurs with different levels of drawing skill, as well as a variety of 3D shapes that include not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks comprising more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated by 7 commonly used metrics. We hope that the benchmarks, the comparative results, and the open-source evaluation code will foster future research in the 3D object retrieval community.
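For readers unfamiliar with retrieval evaluation, the sketch below computes two metrics of the kind commonly used in such tracks, nearest-neighbor precision and mean average precision (mAP); the track's exact metric definitions may differ.

```python
import numpy as np

def retrieval_metrics(ranked_labels, query_labels):
    """ranked_labels[i]: class labels of the 3D models retrieved for query i,
    sorted by decreasing similarity; query_labels[i]: the query's class."""
    nn_hits, aps = [], []
    for ranked, q in zip(ranked_labels, query_labels):
        relevant = (np.asarray(ranked) == q)
        nn_hits.append(float(relevant[0]))                    # nearest-neighbor hit
        hits = np.cumsum(relevant)
        precision_at_k = hits / (np.arange(len(ranked)) + 1)
        aps.append((precision_at_k * relevant).sum() / max(relevant.sum(), 1))
    return {"NN": np.mean(nn_hits), "mAP": np.mean(aps)}

# Toy example: one sketch query of class "chair" against a ranked model list.
print(retrieval_metrics([["chair", "table", "chair"]], ["chair"]))
```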
Subject to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by the complexity of dealing with both input semantics and output syntax. Recent works have proposed several forms of supplementary supervision, but none generalizes across multiple formal languages. This paper proposes a unified intermediate representation (IR) for graph query languages, named GraphQ IR. It has a natural-language-like expression that bridges the semantic gap and a formally defined syntax that maintains the graph structure. Therefore, a neural semantic parser can more precisely convert user queries into GraphQ IR, which can later be losslessly compiled into various downstream graph query languages. Extensive experiments on several benchmarks including KQA Pro, Overnight, GrailQA, and MetaQA-Cypher under standard i.i.d., out-of-distribution, and low-resource settings validate GraphQ IR's superiority over the previous state of the art, with a maximum 11% accuracy improvement.
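A hedged illustration of the two-stage idea follows: a neural parser emits an IR, which a deterministic compiler lowers into a target graph query language. The IR string and the compilation rule below are invented for this toy example and do not reproduce GraphQ IR's actual grammar.

```python
# Toy two-stage pipeline: a (placeholder) neural parser emits an IR string,
# and a deterministic compiler lowers it into a target graph query language.
def parse_to_ir(question: str) -> str:
    # In the real system this is a seq2seq model; here it is a stub.
    return "find entities whose type is film and whose director is Nolan"

def compile_ir_to_cypher(ir: str) -> str:
    # A hand-written rule for this single illustrative IR pattern.
    assert ir.startswith("find entities whose type is")
    _, rest = ir.split("type is ", 1)
    etype, prop_clause = rest.split(" and whose ", 1)
    prop, value = prop_clause.split(" is ")
    return (f"MATCH (e:{etype.strip().capitalize()}) "
            f"WHERE e.{prop.strip()} = '{value.strip()}' RETURN e")

print(compile_ir_to_cypher(parse_to_ir("Which films did Nolan direct?")))
```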
Novelty detection aims to automatically identify out-of-distribution (OOD) data without any prior knowledge of it. It is a critical step in data monitoring, behavior analysis, and other applications, and helps sustain continual learning in the field. Conventional OOD detection methods perform multivariate analysis on an ensemble of data or features, and usually resort to supervision with OOD data to improve accuracy. In reality, such supervision is impractical, as one cannot anticipate the anomalous data. In this paper, we propose a novel, self-supervised approach that does not rely on any predefined OOD data: (1) the new approach evaluates the Mahalanobis distance between the gradients of in-distribution and OOD data, and (2) it is assisted by a self-supervised binary classifier that guides the label selection used to generate the gradients and maximizes the Mahalanobis distance. In evaluations on multiple datasets, such as CIFAR-10, CIFAR-100, SVHN, and TinyImageNet, the proposed approach consistently outperforms state-of-the-art supervised and unsupervised methods under the area under the receiver operating characteristic (AUROC) and area under the precision-recall curve (AUPR) metrics. We further demonstrate that this detector is able to accurately learn one OOD class in continual learning.
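A minimal numpy sketch of the core scoring idea: the Mahalanobis distance of a sample's gradient vector from the in-distribution gradient statistics. The gradient vectors here are random placeholders, and the self-supervised label-selection classifier is not reproduced.

```python
import numpy as np

def fit_gradient_statistics(grads_in):
    """grads_in: (N, D) gradient vectors collected from in-distribution data."""
    mean = grads_in.mean(axis=0)
    cov = np.cov(grads_in, rowvar=False) + 1e-6 * np.eye(grads_in.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(grad, mean, cov_inv):
    """Large distance -> the sample's gradient looks out-of-distribution."""
    diff = grad - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(0)
grads_in = rng.normal(0.0, 1.0, size=(500, 16))        # stand-in ID gradients
mean, cov_inv = fit_gradient_statistics(grads_in)
print(mahalanobis_score(rng.normal(0.0, 1.0, 16), mean, cov_inv))  # small
print(mahalanobis_score(rng.normal(5.0, 1.0, 16), mean, cov_inv))  # much larger
```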
In the proposed SEHybridSN model, a dense block is used to reuse shallow features, aiming to better exploit hierarchical spatial-spectral features. Subsequent depthwise separable convolutional layers are used to discriminate spatial information. Further refinement of the spatial-spectral features is achieved by a channel attention mechanism, which is applied after every 3D convolutional layer and every 2D convolutional layer. Experimental results demonstrate that the proposed model learns more discriminative spatial-spectral features from very little training data. SEHybridSN obtains very satisfactory performance using only 0.05 and 0.01 labeled training data, respectively.
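The channel attention step is, in spirit, a squeeze-and-excitation style reweighting of channels after each convolution. The numpy sketch below is illustrative only; the layer sizes and reduction ratio are arbitrary rather than taken from the model.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """feature_map: (C, H, W); w1: (C, C//r); w2: (C//r, C).
    Squeeze spatial dims, excite with a small bottleneck MLP, rescale channels."""
    squeezed = feature_map.mean(axis=(1, 2))              # (C,) global average pool
    hidden = np.maximum(squeezed @ w1, 0.0)               # ReLU
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))        # sigmoid gate, (C,)
    return feature_map * weights[:, None, None]           # reweight each channel

rng = np.random.default_rng(0)
C, r = 32, 4
fmap = rng.normal(size=(C, 8, 8))
out = channel_attention(fmap, rng.normal(size=(C, C // r)) * 0.1,
                        rng.normal(size=(C // r, C)) * 0.1)
print(out.shape)  # (32, 8, 8)
```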
Complex question answering over knowledge bases (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, and set operations. Existing benchmarks have shortcomings that limit the development of Complex KBQA: 1) they only provide question-answer pairs without explicit reasoning processes; 2) the questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA that includes ~120K diverse natural-language questions. We introduce a compositional and interpretable programming language, KoPL, to represent the reasoning process of complex questions. For each question, we provide the corresponding KoPL program and SPARQL query, so KQA Pro can serve both KBQA and semantic parsing tasks. Experimental results show that SOTA KBQA methods cannot achieve on KQA Pro the promising results they achieve on current datasets, which suggests that KQA Pro is challenging and that Complex KBQA requires further research efforts. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models, and discuss further directions for Complex KBQA. Our code and dataset can be obtained from https://github.com/shijx12/kqapro_baselines.
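To make the idea of an explicit reasoning program concrete, the snippet below writes a hypothetical compositional program for one question; the operator names are illustrative and should not be read as the actual KoPL vocabulary.

```python
# Hypothetical step-by-step program for:
#   "Which film directed by Nolan has the highest box office?"
# Each step names an operator and its arguments; the operator names are
# illustrative, not the actual KoPL operator set.
program = [
    ("Find",          {"entity": "Christopher Nolan"}),
    ("Relate",        {"relation": "director", "direction": "backward"}),
    ("FilterConcept", {"concept": "film"}),
    ("SortByAttr",    {"attribute": "box office", "order": "descending"}),
    ("SelectFirst",   {}),
]

for step, (op, args) in enumerate(program, 1):
    print(f"step {step}: {op}({args})")
```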
Decompilation aims to transform a low-level program language (LPL) (e.g., a binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
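A hedged sketch of the fragment-splitting idea behind a translation unit: the toy code below merely cuts a flat instruction list into chunks at branch boundaries so that each chunk can be translated independently; the paper's actual OTU construction is not reproduced.

```python
BRANCHES = {"jmp", "je", "jne", "ret"}   # toy set of control-flow opcodes

def split_into_units(instructions):
    """Cut a flat instruction list into fragments that end at a branch."""
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        if ins.split()[0] in BRANCHES:
            units.append(current)
            current = []
    if current:
        units.append(current)
    return units

toy_function = ["mov eax, [rbp-4]", "add eax, 1", "je .L2",
                "mov ebx, eax", "ret"]
for i, unit in enumerate(split_into_units(toy_function)):
    print(f"unit {i}: {unit}")
```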
Nearest-Neighbor (NN) classification has been proven a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support data point based on its similarity to the different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or is at least comparable to, SOTA methods which need additional learning steps.
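A numpy sketch of the prior-driven calibration idea: each support feature is blended with base-class prototypes weighted by similarity, and queries are then classified by the nearest (cosine) support sample. The mixing weight and the softmax weighting are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def calibrate_support(support, base_prototypes, alpha=0.3):
    """Blend each support vector with similarity-weighted base prototypes.
    support: (K, D), base_prototypes: (B, D); alpha is an assumed mixing weight."""
    sims = normalize(support) @ normalize(base_prototypes).T        # (K, B)
    weights = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    prior = weights @ base_prototypes                               # (K, D)
    return (1 - alpha) * support + alpha * prior

def nn_classify(query, support, support_labels):
    sims = normalize(query) @ normalize(support).T                  # (Q, K)
    return support_labels[np.argmax(sims, axis=1)]

rng = np.random.default_rng(0)
base = rng.normal(size=(10, 64))               # prototypes of the base classes
support = rng.normal(size=(5, 64))             # one shot per novel class
labels = np.arange(5)
query = support[:3] + 0.1 * rng.normal(size=(3, 64))   # perturbed copies
print(nn_classify(query, calibrate_support(support, base), labels))  # expected: [0 1 2]
```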
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between the content details and the style features. When stylizing the image with sufficient style patterns, the content details may be damaged, and sometimes the objects in the image cannot be distinguished clearly. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances the content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
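One plausible form of such an edge loss (not necessarily STT's exact definition) penalizes the difference between edge maps of the content image and the stylized output, computed here with Sobel filters:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(gray):
    """Gradient magnitude of a grayscale image via Sobel filtering."""
    gx, gy = sobel(gray, axis=0), sobel(gray, axis=1)
    return np.hypot(gx, gy)

def edge_loss(content_gray, stylized_gray):
    """L1 distance between edge maps; a plausible stand-in for STT's edge loss."""
    return np.abs(edge_map(content_gray) - edge_map(stylized_gray)).mean()

rng = np.random.default_rng(0)
content = rng.random((64, 64))
blurred = (content + np.roll(content, 1, axis=0) + np.roll(content, 1, axis=1)) / 3
print(edge_loss(content, content))   # 0.0: identical edges
print(edge_loss(content, blurred))   # > 0: blurring weakens the edges
```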